When seeking a predictive model over biomedical data, one often has more than a single objective in mind, e.g., attaining both high accuracy and low complexity (to promote interpretability). We investigate herein whether multiple objectives can be dynamically tuned by our recently proposed coevolutionary algorithm, SAFE (Solution And Fitness Evolution). We find that, over complex simulated genetics datasets produced by the GAMETES tool, SAFE is able to automatically tune accuracy and complexity with no loss of performance compared with a standard evolutionary algorithm.
We recently proposed SAFE (Solution And Fitness Evolution), a commensalistic coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions. We showed that SAFE successfully evolves solutions within a robot maze domain. In this paper we present an investigation of SAFE's adaptation and application to multiobjective problems, wherein candidate objective functions explore different weightings of each objective. Though preliminary, the results suggest that SAFE, and the concept of coevolving solutions and objective functions, can identify a similar set of optimal multiobjective solutions without explicitly using a Pareto front for fitness calculation and parent selection. These findings support our hypothesis that the SAFE algorithmic concept can not only solve complex problems, but can also adapt to the challenge of problems with multiple objectives.
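As a rough illustration of the weighting idea described above, the following sketch assumes a candidate objective function is simply a weight vector over the problem's objectives (not necessarily how the authors encode it) and scores a solution by a weighted sum.

```python
import numpy as np

# Illustrative sketch only (not the authors' code): a candidate objective
# function is represented as a weight vector over the problem's objectives,
# and it scores a candidate solution via a normalized weighted sum.

def evaluate(solution_objectives, weights):
    """Score a solution given its raw objective values and an evolved weight vector."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so the weights form a convex combination
    return float(np.dot(w, solution_objectives))

# Example with two objectives, both rescaled so that higher is better
# (e.g., accuracy and an inverted complexity score).
solution_objectives = np.array([0.92, 0.75])
candidate_weights = np.array([0.6, 0.4])   # one member of the objective-function population
print(evaluate(solution_objectives, candidate_weights))
```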
We recently highlighted a fundamental problem recognized to confound algorithmic optimization, namely, \textit{conflating} the objective with the objective function. Even when the former is well defined, the latter may not be obvious; e.g., in learning a strategy to navigate a maze to find a goal (the objective), an effective objective function to \textit{evaluate} strategies may not be a simple function of the distance to the goal. We proposed to automate the means by which a good objective function may be discovered -- a proposal reified herein. We present \textbf{S}olution \textbf{A}nd \textbf{F}itness \textbf{E}volution (\textbf{SAFE}), a \textit{commensalistic} coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions. As proof of principle of this concept, we show that SAFE successfully evolves not only solutions within a robotic maze domain, but also the objective functions needed to measure solution quality during evolution.
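The two-population scheme can be sketched as follows; the structure (tournament selection, user-supplied mutation operators, and an unspecified scoring rule for objective functions) is an assumption for illustration, not the authors' implementation.

```python
import random

# A minimal sketch of the two-population scheme described above (assumed
# structure, not the authors' implementation). Candidate solutions and
# candidate objective functions evolve side by side: solutions are scored by
# the objective-function population, while objective functions are scored
# separately, so the relationship is commensalistic rather than competitive.

def tournament(pop, fitness, k=3):
    """Pick the fittest of k randomly chosen individuals."""
    idx = random.sample(range(len(pop)), k)
    return pop[max(idx, key=lambda i: fitness[i])]

def coevolve(solutions, obj_fns, score_obj_fn, mutate_solution, mutate_obj_fn,
             generations=100):
    for _ in range(generations):
        # A solution's fitness is the best score any candidate objective function gives it.
        sol_fit = [max(f(s) for f in obj_fns) for s in solutions]
        # Objective functions are judged independently, e.g. by how useful they
        # are for guiding the solution population (left abstract here).
        fn_fit = [score_obj_fn(f, solutions) for f in obj_fns]
        # Select and vary each population on its own.
        solutions = [mutate_solution(tournament(solutions, sol_fit)) for _ in solutions]
        obj_fns = [mutate_obj_fn(tournament(obj_fns, fn_fit)) for _ in obj_fns]
    return solutions, obj_fns
```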
Machine learning (ML) offers powerful methods for detecting and modeling associations in data, often in data with large feature spaces and complex associations. A variety of useful tools/packages (e.g., scikit-learn) have been developed to make the many elements of data processing, modeling, and interpretation accessible. However, it is not trivial for most investigators to assemble these elements into a rigorous, replicable, unbiased, and effective data analysis pipeline. Automated machine learning (AutoML) seeks to address these issues by simplifying the ML analysis process for all. Here, we present STREAMLINE, a simple, transparent, end-to-end AutoML pipeline designed as a framework to easily conduct rigorous ML modeling and analysis (limited initially to binary classification). STREAMLINE is specifically designed to compare performance between datasets, ML algorithms, and other AutoML tools. It is unique in offering a fully transparent and consistent baseline of comparison through a carefully designed series of pipeline elements, including: (1) exploratory analysis, (2) basic data cleaning, (3) cross-validation partitioning, (4) data scaling and imputation, (5) filter-based feature importance estimation, (6) collective feature selection, (7) ML modeling with 'Optuna' hyperparameter optimization across 15 established algorithms (including less well-known genetic programming and rule-based ML), (8) evaluation across 16 classification metrics, (9) model feature importance estimation, (10) statistical significance comparisons, and (11) automatic export of all results, plots, a PDF summary report, and models that can easily be applied to replication data.
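For readers unfamiliar with what such a pipeline standardizes, the following scikit-learn sketch strings together a few of the listed stages (cross-validation partitioning, scaling and imputation, filter-based feature selection, modeling, and multi-metric evaluation); it is an illustrative baseline, not STREAMLINE's own API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate, StratifiedKFold

# Toy binary classification data standing in for a biomedical dataset.
X, y = make_classification(n_samples=500, n_features=30, random_state=0)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),        # (4) imputation
    ("scale", StandardScaler()),                         # (4) scaling
    ("select", SelectKBest(mutual_info_classif, k=10)),  # (5)-(6) filter-based selection
    ("model", LogisticRegression(max_iter=1000)),        # (7) one of many possible learners
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # (3) CV partitioning
scores = cross_validate(pipe, X, y, cv=cv,
                        scoring=["balanced_accuracy", "roc_auc", "f1"])  # (8) several metrics
for metric, vals in scores.items():
    if metric.startswith("test_"):
        print(metric, np.round(vals.mean(), 3))
```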
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
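In schematic form, the discrete problem described above is a least squares fit penalized by a weighted total variation over the Voronoi adjacency graph; the precise choice of weights $w_{ij}$ (determined by the Voronoi geometry) and of the tuning parameter $\lambda$ is left generic here:
$$\hat{\theta} \in \operatorname*{argmin}_{\theta \in \mathbb{R}^n} \; \frac{1}{2} \sum_{i=1}^{n} (y_i - \theta_i)^2 + \lambda \sum_{(i,j) \in E} w_{ij} \, |\theta_i - \theta_j|,$$
where $E$ collects pairs of neighboring cells in the Voronoi diagram of $x_1,\ldots,x_n$, and the fitted function is piecewise constant, $\hat{f}(x) = \hat{\theta}_i$ for $x$ in the cell of $x_i$.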
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
Many dynamical systems exhibit latent states with intrinsic orderings such as "ally", "neutral" and "enemy" relationships in international relations. Such latent states are evidenced through entities' cooperative versus conflictual interactions which are similarly ordered. Models of such systems often involve state-to-action emission and state-to-state transition matrices. It is common practice to assume that the rows of these stochastic matrices are independently sampled from a Dirichlet distribution. However, this assumption discards ordinal information and treats states and actions falsely as order-invariant categoricals, which hinders interpretation and evaluation. To address this problem, we propose the Ordered Matrix Dirichlet (OMD): rows are sampled conditionally dependent such that probability mass is shifted to the right of the matrix as we move down rows. This results in a well-ordered mapping between latent states and observed action types. We evaluate the OMD in two settings: a Hidden Markov Model and a novel Bayesian Dynamic Poisson Tucker Model tailored to political event data. Models built on the OMD recover interpretable latent states and show superior forecasting performance in few-shot settings. We detail the wide applicability of the OMD to other domains where models with Dirichlet-sampled matrices are popular (e.g. topic modeling) and publish user-friendly code.
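One simple way to realize the stated property (rows sampled conditionally dependent, with probability mass drifting rightward as one moves down the matrix) is sketched below; this construction is purely illustrative and is not claimed to be the paper's OMD definition.

```python
import numpy as np

# Illustrative construction only, not the paper's OMD: each row's Dirichlet
# concentration is the previous row's sampled probabilities shifted one column
# to the right (plus a small floor), making rows conditionally dependent and
# pushing probability mass rightward down the matrix.

def sample_ordered_matrix(n_rows, n_cols, concentration=20.0, floor=0.1, rng=None):
    rng = np.random.default_rng(rng)
    rows = []
    alpha = np.linspace(2.0, 0.5, n_cols)   # first row: mass concentrated on the left
    row = rng.dirichlet(alpha)
    rows.append(row)
    for _ in range(1, n_rows):
        shifted = np.concatenate(([0.0], row[:-1]))   # shift mass one column right
        alpha = concentration * shifted + floor
        row = rng.dirichlet(alpha)
        rows.append(row)
    return np.vstack(rows)

print(np.round(sample_ordered_matrix(4, 5, rng=0), 2))
```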
Diffusion models have emerged as the state-of-the-art for image generation, among other tasks. Here, we present an efficient diffusion-based model for 3D-aware generation of neural fields. Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields and factoring them into a set of axis-aligned triplane feature representations. Thus, our 3D training scenes are all represented by 2D feature planes, and we can directly train existing 2D diffusion models on these representations to generate 3D neural fields with high quality and diversity, outperforming alternative approaches to 3D-aware generation. Our approach requires essential modifications to existing triplane factorization pipelines to make the resulting features easy to learn for the diffusion model. We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
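As background on the triplane representation referenced above, the following PyTorch sketch queries three axis-aligned feature planes at a 3D point and decodes the aggregated features to an occupancy value with a small MLP; the channel count, plane resolution, and sum aggregation are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Schematic triplane occupancy decoder: a 3D point is projected onto the XY,
# XZ, and YZ feature planes, the bilinearly sampled features are summed, and a
# small MLP maps them to an occupancy value.

class TriplaneOccupancy(nn.Module):
    def __init__(self, channels=32, resolution=128):
        super().__init__()
        # Three learned planes (XY, XZ, YZ), each of shape (C, R, R).
        self.planes = nn.Parameter(torch.randn(3, channels, resolution, resolution) * 0.01)
        self.decoder = nn.Sequential(
            nn.Linear(channels, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, xyz):               # xyz: (N, 3) points in [-1, 1]^3
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
        feats = 0.0
        for plane, uv in zip(self.planes, coords):
            grid = uv.view(1, -1, 1, 2)   # (1, N, 1, 2) sampling grid
            sampled = F.grid_sample(plane.unsqueeze(0), grid, align_corners=True)
            feats = feats + sampled.squeeze(0).squeeze(-1).T   # accumulate (N, C) features
        return torch.sigmoid(self.decoder(feats))              # (N, 1) occupancy

points = torch.rand(1024, 3) * 2 - 1
occupancy = TriplaneOccupancy()(points)
print(occupancy.shape)
```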
DeepAngle is a machine learning-based method to determine the contact angles of different phases in tomography images of porous materials. Measurement of angles in 3D needs to be done within the surface perpendicular to the angle planes, and it can become inaccurate when dealing with the discretized space of the image voxels. A computationally intensive solution is to correlate and vectorize all surfaces using an adaptable grid, and then measure the angles within the desired planes. In contrast, the present study provides a rapid and low-cost technique powered by deep learning to estimate the interfacial angles directly from images. DeepAngle is tested on both synthetic and realistic images against the direct measurement technique and is found to improve the r-squared by 5 to 16% while lowering the computational cost by a factor of 20. This rapid method is especially applicable for processing large tomography data and time-resolved images, which is computationally intensive. The developed code and the dataset are available at an open repository on GitHub (https://www.github.com/ArashRabbani/DeepAngle).